BERT Large Uncased, Sparse 90% Unstructured, Prune Once for All (Prune OFA)
License: Apache-2.0
This model is a sparse pre-trained BERT-large model with 90% unstructured weight sparsity, obtained by magnitude-based weight pruning during pre-training. It is intended as a starting point for fine-tuning on downstream language tasks while preserving its sparsity pattern.
Tags: Large Language Model, Transformers, English
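To make the pruning idea above concrete, here is a minimal, illustrative sketch of unstructured magnitude pruning: the 90% of weights with the smallest absolute values are set to zero. This is not the model's actual training code; the function name and toy weight values are purely hypothetical.

```python
def magnitude_prune(weights, sparsity=0.9):
    """Return a copy of `weights` with the smallest-magnitude entries zeroed.

    Unstructured pruning treats every weight independently: any entry may
    be removed, regardless of its position in the weight matrix.
    """
    k = int(len(weights) * sparsity)  # number of weights to zero out
    if k == 0:
        return list(weights)
    # Magnitude threshold: the k-th smallest absolute value.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, removed = [], 0
    for w in weights:
        if removed < k and abs(w) <= threshold:
            pruned.append(0.0)  # prune: weight magnitude is below the cutoff
            removed += 1
        else:
            pruned.append(w)    # keep: weight survives pruning
    return pruned

# Toy example: 10 weights pruned to 90% sparsity leaves a single nonzero.
weights = [0.01, -0.5, 0.03, 0.2, -0.02, 0.9, 0.04, -0.06, 0.11, 0.001]
pruned = magnitude_prune(weights, sparsity=0.9)
print(pruned)  # nine of the ten entries become 0.0; only 0.9 survives
```

In the full model the same principle is applied to the transformer's weight matrices, so 90% of the parameters are zero and can be skipped or compressed by sparsity-aware runtimes.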